
    Exactness of Belief Propagation for Some Graphical Models with Loops

    It is well known that an arbitrary graphical model of statistical inference defined on a tree, i.e. on a graph without loops, is solved exactly and efficiently by an iterative Belief Propagation (BP) algorithm convergent to the unique minimum of the so-called Bethe free energy functional. For a general graphical model on a loopy graph the functional may show multiple minima, the iterative BP algorithm may converge to one of the minima or may not converge at all, and the global minimum of the Bethe free energy functional is not guaranteed to correspond to the optimal Maximum-Likelihood (ML) solution in the zero-temperature limit. However, there are exceptions to this general rule, discussed in \cite{05KW} and \cite{08BSS} in two different contexts, where the zero-temperature version of the BP algorithm finds the ML solution for special models on graphs with loops. These two models share a key feature: their ML solutions can be found by an efficient Linear Programming (LP) algorithm with a Totally Unimodular (TUM) matrix of constraints. Generalizing the two models, we consider a class of graphical models reducible in the zero-temperature limit to LP with TUM constraints. Assuming that a gedanken algorithm, g-BP, finding the global minimum of the Bethe free energy is available, we show that in the limit of zero temperature g-BP outputs the ML solution. Our consideration is based on an equivalence established between the gapless Linear Programming (LP) relaxation of the graphical model in the $T \to 0$ limit and the respective LP version of the Bethe free energy minimization.
    Comment: 12 pages, 1 figure, submitted to JSTA
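
    A minimal sketch (not from the paper) of the structural fact the abstract relies on: when the zero-temperature problem reduces to an LP whose constraint matrix is totally unimodular, the LP relaxation is already integral and hence returns the exact ML/MAP solution. The assignment problem is a standard example; the scipy-based formulation, random cost matrix, and problem size below are illustrative assumptions, not the paper's construction.

        # Sketch: LP relaxation of the assignment problem. Its constraint matrix is
        # totally unimodular, so the relaxed LP optimum is already a 0/1 assignment.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        n = 5
        cost = rng.uniform(size=(n, n))          # hypothetical pairwise costs

        # Constraints: each row and each column of the assignment matrix sums to 1.
        A_eq = np.zeros((2 * n, n * n))
        for i in range(n):
            A_eq[i, i * n:(i + 1) * n] = 1.0     # row i sums to 1
            A_eq[n + i, i::n] = 1.0              # column i sums to 1
        b_eq = np.ones(2 * n)

        res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0.0, 1.0)] * (n * n), method="highs")
        x = res.x.reshape(n, n)

        # TUM constraint matrix (with integral right-hand side) => every vertex of
        # the feasible polytope is integral, so the LP optimum is an exact assignment.
        print(np.round(x, 3))
        print("integral:", np.allclose(x, np.round(x), atol=1e-6))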

    The non-Verbal Structure of Patient Case Discussions in Multidisciplinary Medical Team Meetings

    Meeting analysis has a long theoretical tradition in social psychology, with established practical ramifications in computer science, especially in computer-supported cooperative work. More recently, a good deal of research has focused on the issues of indexing and browsing multimedia records of meetings. Most research in this area, however, is still based on data collected in laboratories, under somewhat artificial conditions. This paper presents an analysis of the discourse structure and spontaneous interactions at real-life multidisciplinary medical team meetings held as part of the work routine in a major hospital. It is hypothesised that the conversational structure of these meetings, as indicated by sequencing and duration of vocalisations, enables segmentation into individual patient case discussions. The task of segmenting audio-visual records of multidisciplinary medical team meetings is described as a topic segmentation task, and a method for automatic segmentation is proposed. An empirical evaluation based on hand-labelled data is presented which determines the optimal length of vocalisation sequences for segmentation, and establishes the competitiveness of the method with approaches based on more complex knowledge sources. The effectiveness of Bayesian classification as a segmentation method, and its applicability to meeting segmentation in other domains, are discussed.
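
    A hypothetical sketch of the segmentation idea described above: treat windows of vocalisation statistics as feature vectors and let a Bayesian (naive Bayes) classifier mark case-discussion boundaries. The feature names, synthetic data, and scikit-learn classifier are illustrative assumptions; the paper's actual features and evaluation differ.

        # Sketch: Bayesian classification of vocalisation windows into
        # "case boundary" vs. "within a case" (all data here is synthetic).
        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(1)

        def window_features(n, boundary):
            """Toy per-window features: mean vocalisation length, number of
            speaker changes, total silence (hypothetical choices)."""
            if boundary:  # assume boundaries show shorter turns, more changes, more silence
                return np.column_stack([rng.normal(1.0, 0.3, n),
                                        rng.poisson(6, n),
                                        rng.normal(4.0, 1.0, n)])
            return np.column_stack([rng.normal(3.0, 0.8, n),
                                    rng.poisson(2, n),
                                    rng.normal(1.0, 0.5, n)])

        X = np.vstack([window_features(200, True), window_features(200, False)])
        y = np.array([1] * 200 + [0] * 200)      # 1 = case boundary, 0 = within a case

        clf = GaussianNB().fit(X, y)

        # Segment a synthetic "meeting": report windows labelled as boundaries.
        meeting = np.vstack([window_features(20, False), window_features(2, True),
                             window_features(20, False)])
        print(np.flatnonzero(clf.predict(meeting)))   # indices of predicted boundaries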

    Language acquisition and probabilistic models: keeping it simple

    Hierarchical Bayesian Models (HBMs) have been used with some success to capture empirically observed patterns of under- and overgeneralization in child language acquisition. However, as is well known, HBMs are "ideal" learning systems, assuming access to unlimited computational resources that may not be available to child language learners. Consequently, it remains crucial to carefully assess the use of HBMs along with alternative, possibly simpler, candidate models. This paper presents such an evaluation for a language acquisition domain where explicit HBMs have been proposed: the acquisition of English dative constructions. In particular, we present a detailed, empirically grounded model-selection comparison of HBMs vs. a simpler alternative based on clustering along with maximum likelihood estimation that we call linear competition learning (LCL). Our results demonstrate that LCL can match HBM model performance without incurring the high computational costs associated with HBMs.
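
    A hypothetical sketch of the clustering-plus-maximum-likelihood alternative (LCL) contrasted with HBMs above: cluster verbs by their construction-usage profiles, then use per-cluster maximum likelihood estimates for new or low-frequency verbs. The verb counts, number of clusters, and scikit-learn k-means step are illustrative assumptions, not the paper's model or data.

        # Sketch: cluster verbs by usage profile, then take per-cluster MLEs of
        # dative-construction probabilities (all counts here are synthetic).
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)

        # Toy data: per-verb counts of (prepositional-dative, double-object) uses.
        counts = np.vstack([rng.multinomial(30, [0.8, 0.2], size=20),   # mostly PD verbs
                            rng.multinomial(30, [0.2, 0.8], size=20)])  # mostly DO verbs

        # Step 1: cluster verbs by their normalised usage profiles.
        profiles = counts / counts.sum(axis=1, keepdims=True)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)

        # Step 2: per-cluster maximum likelihood estimate of construction probabilities.
        for k in range(2):
            cluster_counts = counts[labels == k].sum(axis=0)
            mle = cluster_counts / cluster_counts.sum()
            print(f"cluster {k}: P(PD)={mle[0]:.2f}, P(DO)={mle[1]:.2f}")

        # A low-frequency verb inherits its cluster's MLE rather than requiring a
        # full hierarchical Bayesian posterior, which is far cheaper to compute.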